Nobel Reconsiders AI Doom — Michael Levitt on Risk & Centaur Futures | Doom Debates
Update: 2025-12-04
Description
A Nobel laureate who built molecular models explains why recent leaps in compute make AI risk urgent and complicated. (Original: 1 hour → Summary: 4 minutes.)
Host Liron Shapira sits down with Michael Levitt to unpack why the last five years feel different, why intelligence is multi‑dimensional, and how a single measurable axis—“steering power”—helps assess real-world danger. Learn how GPU-driven serendipity, human–AI “centaur” teams, and bad‑actor scenarios interact with threats like engineered pathogens and nuclear proliferation. They debate practical responses—export controls, GPU monitoring, and a possible emergency pause—while stressing trade‑offs between safety and social benefit. Expect clear takes on AI risk, alignment, regulation, compute transparency, and governance.
Listen now to get the key ideas in minutes.